350 Investigating the Architecture of Speech Processing Pathways in the Brain
- Plamen Nikolov, Srikanth Damera, Noah Steinberg, Naama Zur, Lillian Chang, Kyle Yoon, Marcus Dreux, Peter Turkeltaub, Josef Rauschecker, Maximilian Riesenhuber
-
- Journal:
- Journal of Clinical and Translational Science / Volume 6 / Issue s1 / April 2022
- Published online by Cambridge University Press:
- 19 April 2022, pp. 64-65
-
- Article (Open access)
-
OBJECTIVES/GOALS: Speech production requires mapping between sound-based and motor-based neural representations of a word, accomplished by learning internal models. However, the neural bases of these internal models remain unclear. The aim of this study is to provide experimental evidence for these internal models in the brain during speech production. METHODS/STUDY POPULATION: 16 healthy human adults were recruited for this electroencephalography (EEG) speech study. 20 English pseudowords were designed to vary in confusability along specific features of articulation (place vs. manner). All words were controlled for length and voicing. Three task conditions were performed: speech perception, covert speech production, and overt speech production. EEG was recorded using a 64-channel Biosemi ActiveTwo system. EMG was recorded over the orbicularis oris inferior and neck strap muscles. Overt productions were recorded with a high-quality microphone to determine overt production onset; EMG was used to determine covert production onset. Representational similarity analysis (RSA) was used to probe the sound- and motor-based neural representations over sensors and time for each task. RESULTS/ANTICIPATED RESULTS: Production (motor) and perception (sound) neural representations were computed using a cross-validated squared Euclidean distance metric. The RSA results in the speech perception task show strong selectivity around 150 ms, which is compatible with recent electrocorticography findings in human superior temporal gyrus. Parietal sensors showed a large difference for motor-based neural representations, indicating strong encoding of production-related processes, as hypothesized by previous studies on the ventral and dorsal stream model of language. Temporal sensors, however, showed a large change for both motor- and sound-based neural representations. This is a surprising result, since temporal regions are believed to be primarily engaged in perception (sound-based) processes. DISCUSSION/SIGNIFICANCE: This study used neuroimaging (EEG) and advanced multivariate pattern analysis (RSA) to test models of production (motor-) and perception (sound-) based neural representations in three different speech task conditions. These results show the strong feasibility of this approach for mapping how perception and production processes interact in the brain.
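The abstract's distance metric can be illustrated with a short sketch. A cross-validated squared Euclidean distance is computed as the inner product of condition-difference patterns estimated from two independent data splits (e.g., separate runs), which makes the estimate unbiased under noise (it can even go slightly negative, unlike an ordinary distance). The function and variable names below are illustrative, not the authors' actual analysis code; the sensor patterns are assumed to be simple NumPy vectors.

```python
import numpy as np

def cv_sq_euclidean(a1, b1, a2, b2):
    """Cross-validated squared Euclidean distance between conditions a and b.

    a1, b1: sensor patterns for the two conditions from data split 1
    a2, b2: the same conditions from an independent data split 2
    """
    # Difference pattern estimated separately in each split; their inner
    # product averages out split-independent noise (unbiased estimate).
    return float(np.dot(a1 - b1, a2 - b2))

def rdm(split1, split2):
    """Build a representational dissimilarity matrix over all condition pairs.

    split1, split2: arrays of shape (n_conditions, n_sensors) from
    independent runs. Returns a symmetric (n_conditions, n_conditions) RDM.
    """
    n = split1.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = cv_sq_euclidean(
                split1[i], split1[j], split2[i], split2[j]
            )
    return D
```

In an analysis like the one described, an RDM of this kind would be computed per timepoint (and per sensor group) and compared against model RDMs encoding sound-based vs. motor-based feature confusability. When the two splits are identical and noise-free, the measure reduces to the ordinary squared Euclidean distance.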
12 - Object Categorization in Man, Monkey, and Machine: Some Answers and Some Open Questions
- Edited by Sven J. Dickinson, University of Toronto, Aleš Leonardis, University of Ljubljana, Bernt Schiele, Technische Universität Darmstadt, Germany, Michael J. Tarr, Carnegie Mellon University, Pennsylvania
-
- Book:
- Object Categorization
- Published online:
- 20 May 2010
- Print publication:
- 07 September 2009, pp. 216-240
-
- Chapter
-
Summary
Introduction
Understanding how the brain performs object categorization is of significant interest for cognitive neuroscience as well as for machine vision. In the past decade, building on earlier efforts (Fukushima 1980; Perrett and Oram 1993; Wallis and Rolls 1997), there has been significant progress in understanding the neural mechanisms underlying object recognition in the brain (Kourtzi and DiCarlo 2006; Peissig and Tarr 2007; Riesenhuber and Poggio 1999b; Riesenhuber and Poggio 2000; Riesenhuber and Poggio 2002; Serre et al. 2007a). There is now a quantitative computational model of the ventral visual pathway in primates that models rapid object recognition (Riesenhuber and Poggio 1999b; Serre et al. 2007a), putatively based on a single feedforward pass through the visual system (Thorpe and Fabre-Thorpe 2001). The model has been validated through a number of experiments that have confirmed nontrivial qualitative and quantitative predictions of the model (Freedman et al. 2003; Gawne and Martin 2002; Jiang et al. 2007; Jiang et al. 2006; Lampl et al. 2004). In the domain of machine vision, the success of the biological model in accounting for human object recognition performance (see, e.g., Serre et al. 2007b) has led to the development of a family of biologically inspired machine vision systems (see, e.g., Marin-Jimenez and Perez de la Blanca 2006; Meyers and Wolf 2008; Mutch and Lowe 2006; Serre et al. 2007c).
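The model family cited here builds hierarchies that alternate template matching ("S" layers, which increase selectivity) with max pooling ("C" layers, which increase invariance to position). The toy sketch below is not the published model, only an illustration of that alternation on a 1D signal; the function names, Gaussian tuning, and pooling width are all simplifying assumptions.

```python
import numpy as np

def s_layer(x, templates, sigma=1.0):
    """Selectivity layer: Gaussian-tuned template match at each position.

    x: 1D input signal; templates: (n_templates, k) array of stored patches.
    Returns responses of shape (n_templates, n_positions).
    """
    n, k = len(x), templates.shape[1]
    out = np.empty((templates.shape[0], n - k + 1))
    for t, tmpl in enumerate(templates):
        for i in range(n - k + 1):
            # Response peaks at 1.0 when the local patch matches the template.
            out[t, i] = np.exp(-np.sum((x[i:i + k] - tmpl) ** 2)
                               / (2 * sigma ** 2))
    return out

def c_layer(s, pool=2):
    """Invariance layer: max pooling over position within each template."""
    return np.array([
        [s[t, i:i + pool].max()
         for i in range(0, s.shape[1] - pool + 1, pool)]
        for t in range(s.shape[0])
    ])
```

Stacking several such S/C pairs yields units that respond to increasingly complex patterns with increasing tolerance to where the pattern appears, which is the core idea behind the single feedforward-pass account of rapid recognition described above.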